Variational approach


Variational PDEs for Acceleration on Manifolds and Application to Diffeomorphisms

Neural Information Processing Systems

We consider the optimization of cost functionals on manifolds and derive a variational approach to accelerated methods on manifolds. We demonstrate the methodology on the infinite-dimensional manifold of diffeomorphisms, motivated by registration problems in computer vision. We build on the variational approach to accelerated optimization by Wibisono, Wilson and Jordan, which applies in finite dimensions, and generalize that approach to infinite-dimensional manifolds. We derive the continuum evolution equations, which are partial differential equations (PDEs), and relate them to simple mechanical principles. Our approach can also be viewed as a generalization of the $L^2$ optimal mass transport problem. It evolves an infinite number of particles endowed with mass, represented as a mass density.
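
As context for the construction being generalized, the finite-dimensional approach of Wibisono, Wilson and Jordan is built on the Bregman Lagrangian; a sketch in their standard notation (the parameters $\alpha_t, \beta_t, \gamma_t$ and distance-generating function $h$ are from that paper, not necessarily the choices made here):

    % Bregman divergence of a convex distance-generating function h
    D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle
    % Bregman Lagrangian; under the ideal-scaling conditions
    % \dot{\beta}_t \le e^{\alpha_t} and \dot{\gamma}_t = e^{\alpha_t},
    % its Euler-Lagrange equations give continuous-time limits of
    % accelerated methods
    \mathcal{L}(X, V, t) = e^{\gamma_t + \alpha_t}
      \left( D_h\!\left(X + e^{-\alpha_t} V,\; X\right) - e^{\beta_t} f(X) \right)

Minimizing the resulting action over curves recovers accelerated dynamics in finite dimensions; the paper poses an analogous action on the manifold of diffeomorphisms, where the Euler-Lagrange equations become PDEs.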


A Variational Approach for Learning from Positive and Unlabeled Data

Neural Information Processing Systems

Learning binary classifiers only from positive and unlabeled (PU) data is an important and challenging task in many real-world applications, including web text classification, disease gene identification and fraud detection, where negative samples are difficult to verify experimentally. Most recent PU learning methods are developed based on the misclassification risk of the supervised learning type, and they may suffer from inaccurate estimates of class prior probabilities. In this paper, we introduce a variational principle for PU learning that allows us to quantitatively evaluate the modeling error of the Bayesian classifier directly from given data. This leads to a loss function which can be efficiently calculated without involving class prior estimation or any other intermediate estimation problems, and the variational learning method can then be employed to optimize the classifier under general conditions. We illustrate the effectiveness of the proposed variational method on a number of benchmark examples.
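
As a rough illustration of the prior-free flavor of such a loss, here is a minimal PyTorch-style sketch, assuming a classifier Phi with outputs in (0, 1); the function name and exact form are illustrative, not the paper's precise objective:

    import torch

    def variational_pu_loss(phi_unlabeled, phi_positive, eps=1e-8):
        """Sketch of a variational-style PU loss (illustrative only).

        phi_unlabeled: Phi(x) on an unlabeled minibatch, values in (0, 1).
        phi_positive:  Phi(x) on a positive minibatch, values in (0, 1).

        Neither term involves the class prior: the first matches Phi to
        the unlabeled data in aggregate, the second pushes Phi up on the
        labeled positives.
        """
        term_unlabeled = torch.log(phi_unlabeled.mean() + eps)
        term_positive = torch.log(phi_positive + eps).mean()
        return term_unlabeled - term_positive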


Deep Variational Instance Segmentation

Neural Information Processing Systems

Instance segmentation, which seeks to obtain both class and instance labels for each pixel in the input image, is a challenging task in computer vision. State-of-the-art algorithms often employ a search-based strategy, which first divides the output image with a regular grid and generates proposals at each grid cell; the proposals are then classified and their boundaries refined.
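
To make the baseline concrete, the grid-and-propose step that such search-based pipelines start from can be caricatured in a few lines; the helper below is hypothetical, with names and defaults invented for illustration:

    import numpy as np

    def grid_proposals(img_h, img_w, cell=32, scales=(64, 128)):
        """Tile the image with a regular grid and emit one candidate box
        per (cell, scale) pair; later stages would classify each box and
        refine its boundary."""
        boxes = []
        for cy in range(cell // 2, img_h, cell):
            for cx in range(cell // 2, img_w, cell):
                for s in scales:
                    half = s // 2
                    boxes.append((max(cx - half, 0), max(cy - half, 0),
                                  min(cx + half, img_w), min(cy + half, img_h)))
        return np.array(boxes)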


From MAP to Marginals: Variational Inference in Bayesian Submodular Models

Neural Information Processing Systems

Submodular optimization has found many applications in machine learning and beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise MRFs and Determinantal Point Processes. In particular, we present L-Field, a variational approach to general log-submodular and log-supermodular distributions based on sub- and supergradients. We obtain both lower and upper bounds on the log-partition function, which enables us to compute probability intervals for marginals, conditionals and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over differentials of submodular functions, which we show how to solve optimally. We provide theoretical guarantees of the approximation quality with respect to the curvature of the function. We further establish natural relations between our variational approach and the classical mean-field method. Lastly, we empirically demonstrate the accuracy of our inference scheme on several submodular models.
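
The computational appeal of bounding with modular functions is that their partition sums factorize; a standard identity (notation assumed here: ground set $V$, modular $m(S) = \sum_{i \in S} m_i$ with $m(\emptyset) = 0$):

    \sum_{S \subseteq V} e^{m(S)} = \prod_{i \in V} \left( 1 + e^{m_i} \right)
    % so if m(S) \le F(S) for all S, then for p(S) \propto e^{F(S)}:
    \log Z = \log \sum_{S \subseteq V} e^{F(S)}
           \;\ge\; \sum_{i \in V} \log\left( 1 + e^{m_i} \right)

Supergradient-based modular upper bounds give the matching inequality in the other direction, consistent with the sub- and supergradient construction described above.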


A variational approach to dimension-free self-normalized concentration

Chugg, Ben, Ramdas, Aaditya

arXiv.org Machine Learning

We study the self-normalized concentration of vector-valued stochastic processes. We focus on bounds for sub-$\psi$ processes, a tail condition that encompasses a wide variety of well-known distributions (including sub-exponential, sub-Gaussian, sub-gamma, and sub-Poisson distributions). Our results recover and generalize the influential bound of Abbasi-Yadkori et al. (2011) and fill a gap in the literature between determinant-based bounds and those based on condition numbers. As applications we prove a Bernstein inequality for random vectors satisfying a moment condition (which is more general than boundedness), and also provide the first dimension-free, self-normalized empirical Bernstein inequality. Our techniques are based on the variational (PAC-Bayes) approach to concentration.
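
For reference, the Abbasi-Yadkori et al. (2011) bound being recovered takes, in its commonly cited form (conditionally 1-sub-Gaussian scalar noise $\varepsilon_s$, regularizer $\lambda > 0$), the following shape: with probability at least $1 - \delta$, simultaneously for all $t$,

    % S_t = \sum_{s=1}^{t} \varepsilon_s x_s, \quad
    % \bar{V}_t = \lambda I + \sum_{s=1}^{t} x_s x_s^\top
    \| S_t \|_{\bar{V}_t^{-1}}
      \;\le\; \sqrt{ 2 \log\!\left(
        \frac{ \det(\bar{V}_t)^{1/2} \, \det(\lambda I)^{-1/2} }{ \delta }
      \right) }

The determinant ratio is where dimension can enter, which is the dependence the variational (PAC-Bayes) technique described above is aimed at removing.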


Congratulations to the #ICML2025 award winners!

AIHub

While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society. In this position paper, we identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support towards the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and the structural changes in labor markets that exacerbate income inequality. Additionally, the closed-source approach of major stakeholders in AI development resembles rent-seeking behavior through exploiting resources, breeding mediocrity in creative labor, and monopolizing innovation.


Inversion of Magnetic Data using Learned Dictionaries and Scale Space

Ahamed, Shadab, Ghyselincks, Simon, Arias, Pablo Chang Huang, Kloiber, Julian, Ranjbar, Yasin, Tang, Jingrong, Zakariaei, Niloufar, Haber, Eldad

arXiv.org Artificial Intelligence

Magnetic data inversion is an important tool in geophysics, used to infer subsurface magnetic susceptibility distributions from surface magnetic field measurements. This inverse problem is inherently ill-posed, characterized by non-unique solutions, depth ambiguity, and sensitivity to noise. Traditional inversion approaches rely on predefined regularization techniques to stabilize solutions, limiting their adaptability to complex or diverse geological scenarios. In this study, we propose an approach that integrates variable dictionary learning and scale-space methods to address these challenges. Our method employs learned dictionaries, allowing for adaptive representation of complex subsurface features that are difficult to capture with predefined bases. Additionally, we extend classical variational inversion by incorporating multi-scale representations through a scale-space framework, enabling the progressive introduction of structural detail while mitigating overfitting. We implement both fixed and dynamic dictionary learning techniques, with the latter introducing iteration-dependent dictionaries for enhanced flexibility. Using a synthetic dataset to simulate geological scenarios, we demonstrate significant improvements in reconstruction accuracy and robustness compared to conventional variational and dictionary-based methods. Our results highlight the potential of learned dictionaries, especially when coupled with scale-space dynamics, to improve model recovery and noise handling. These findings underscore the promise of our data-driven approach for advancing magnetic data inversion and its applications in geophysical exploration, environmental assessment, and mineral prospecting.
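
A toy sketch of the fixed-dictionary variant of this kind of formulation, under assumed notation (forward operator A, observed data d, learned dictionary D, model m = D c; all names are illustrative, not the paper's code):

    import numpy as np

    def dictionary_inversion(A, d, D, n_iter=200, step=1e-2, lam=1e-3):
        """Recover a susceptibility model m = D @ c by proximal gradient
        (ISTA) on 0.5 * ||A @ D @ c - d||^2 + lam * ||c||_1, so that the
        learned dictionary, rather than a predefined basis, regularizes
        the ill-posed inversion. The dynamic variant described above
        would additionally swap D across iterations."""
        c = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (A.T @ (A @ (D @ c) - d))   # data-misfit gradient
            c = c - step * grad
            # soft threshold enforces sparsity of the dictionary coefficients
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
        return D @ c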


Review for NeurIPS paper: A Variational Approach for Learning from Positive and Unlabeled Data

Neural Information Processing Systems

The trustworthiness of f_p: In spite of the clear assumption statements, I have concerns about utilizing f_p in the given setting. I am comfortable up to the derivation of Theorem 6 and Eq 6. However, the authors use Eq 7 to optimize the KL divergence, and Eq 7 takes the expectation with respect to the distribution f_p. The paper asserts that the distribution function f_p can be approximated by the positive dataset; however, Algorithm 1 uses a sample minibatch B_P to empirically estimate f_p.


Review for NeurIPS paper: A Variational Approach for Learning from Positive and Unlabeled Data

Neural Information Processing Systems

This paper presents an improved method for learning binary classifiers from positive and unlabeled data. Prior work has required the specification of the proportion of positive data in the unlabeled data set. This parameter is difficult to estimate, and the resulting classifier is sensitive to it. While this paper is not the first to attempt to do away with the class prior estimation problem, it reports better empirical performance along with theoretical results on consistency. As noted by all of the reviewers, the paper is very clearly written and helpfully provides a summary table comparing and contrasting prior work with the current work.